# Dense vector retrieval
- **Intention** (leeloolee, Apache-2.0): The GTE multilingual base model is a dense sentence transformer that supports sentence similarity calculation and text embedding tasks in multiple languages. Tags: Text Embedding, Transformers, Supports Multiple Languages. Downloads: 32, Likes: 3.
- **Bert Base 1024 Biencoder 64M Pairs** (shreyansh26): A long-context bi-encoder based on MosaicML's pre-trained BERT with a 1024-token sequence length, for sentence and paragraph embeddings. Tags: Text Embedding, Transformers, Supports Multiple Languages. Downloads: 19, Likes: 0.
- **Bert Base 1024 Biencoder 6M Pairs** (shreyansh26): A long-context bi-encoder based on MosaicML's pre-trained BERT with a 1024-token sequence length, designed to generate 768-dimensional dense vector representations of sentences and paragraphs. Tags: Text Embedding, Transformers, Supports Multiple Languages. Downloads: 24, Likes: 0.
- **Sentence Transformers Gte Base** (embaas): A sentence embedding model based on sentence-transformers that maps sentences and paragraphs into a 768-dimensional vector space, suitable for tasks such as semantic search and clustering. Tags: Text Embedding. Downloads: 43, Likes: 0.
- **Semantic Xlmr Bn** (afschowdhury): A multilingual sentence embedding model optimized for Bengali, mapping text to a 768-dimensional vector space. Tags: Text Embedding, Transformers, Other. Downloads: 225, Likes: 1.
- **Test Food** (Linus4Lyf): A sentence-transformers based model that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for sentence similarity computation and semantic search. Tags: Text Embedding, Transformers. Downloads: 42, Likes: 0.
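All of the models above produce fixed-size dense vectors (typically 768-dimensional) that are compared with cosine similarity at retrieval time. A minimal, model-free sketch of that scoring step, assuming the embeddings have already been computed (toy 4-dimensional vectors stand in for real 768-dimensional embeddings; `cosine_top_k` is a hypothetical helper name, not part of any of the listed models):

```python
import numpy as np

def cosine_top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3):
    """Return indices and scores of the k corpus vectors most similar to query."""
    # L2-normalize both sides so a plain dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    # argsort descending, keep the top k
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Toy stand-ins for sentence embeddings
corpus = np.array([
    [1.0, 0.0, 0.0, 0.0],   # doc 0: near-duplicate of the query topic
    [0.9, 0.1, 0.0, 0.0],   # doc 1: closely related
    [0.0, 1.0, 0.0, 0.0],   # doc 2: unrelated
])
query = np.array([1.0, 0.05, 0.0, 0.0])

idx, scores = cosine_top_k(query, corpus, k=2)
```

In practice the vectors would come from one of the models above (for example via the sentence-transformers library), and for large corpora the exhaustive dot product would be replaced by an approximate nearest-neighbor index.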